Sparsity-Specific Code Optimization using Expression Trees

Authors

Abstract

We introduce a code generator that converts unoptimized C++ code operating on sparse data into vectorized and parallel CPU or GPU kernels. Our approach unrolls the computation into a massive expression graph, performs redundancy elimination and grouping, and then generates an architecture-specific kernel that solves the same problem, assuming that the sparsity pattern is fixed, which is a common scenario in many applications in computer graphics and scientific computing. We show that our approach scales to large problems and can achieve speedups of two orders of magnitude on CPUs and three on GPUs, compared to a set of manually optimized baselines. To demonstrate the practical applicability of our approach, we employ it to optimize popular algorithms with applications in physical simulation and interactive mesh deformation.
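As a rough illustration of the idea described in the abstract, the following sketch (all names hypothetical, not the paper's actual API) unrolls a sparse matrix-vector product with a fixed sparsity pattern into expression trees and then hash-conses repeated subexpressions so each distinct term is computed once; the actual system emits vectorized C++/CUDA kernels rather than Python tuples.

```python
# Toy sketch of the unroll-then-deduplicate pipeline from the abstract.
# Hypothetical names; the real system targets C++/GPU kernels.

def unroll_spmv(rows, x_names):
    """Unroll y = A @ x into one expression tree per output.
    `rows` maps an output index to its fixed list of (column, coefficient)."""
    exprs = []
    for r, entries in sorted(rows.items()):
        terms = tuple(("mul", coeff, x_names[c]) for c, coeff in entries)
        exprs.append((r, ("add",) + terms))
    return exprs

def eliminate_redundancy(exprs):
    """Hash-cons subexpressions so each distinct term becomes one statement."""
    seen = {}    # normalized expression -> temporary name
    stmts = []   # straight-line "kernel" body: (temp, expression)
    def visit(e):
        if not isinstance(e, tuple):
            return e                      # leaf: coefficient or input name
        norm = (e[0],) + tuple(visit(c) for c in e[1:])
        if norm not in seen:
            seen[norm] = f"t{len(seen)}"
            stmts.append((seen[norm], norm))
        return seen[norm]
    outputs = {r: visit(e) for r, e in exprs}
    return stmts, outputs

# Two rows share the term 3.0 * x1; after elimination it is computed once.
exprs = unroll_spmv({0: [(0, 2.0), (1, 3.0)], 1: [(1, 3.0)]}, ["x0", "x1"])
stmts, outputs = eliminate_redundancy(exprs)
```

Because the sparsity pattern is fixed, this deduplication cost is paid once at generation time, and the emitted straight-line kernel can be reused across many solves.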


Similar articles

Prediction of blood cancer using leukemia gene expression data and sparsity-based gene selection methods

Background: DNA microarray is a useful technology that simultaneously assesses the expression of thousands of genes. It can be utilized for the detection of cancer types and cancer biomarkers. This study aimed to predict blood cancer using leukemia gene expression data and a robust ℓ2,p-norm sparsity-based gene selection method. Materials and Methods: In this descriptive study, the microarray ...
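As a generic illustration of sparsity-based gene selection (not the study's robust ℓ2,p-norm method, whose details the snippet omits), genes can be ranked by the per-row Euclidean norms that make up an ℓ2,1 penalty on a genes-by-classes weight matrix:

```python
import numpy as np

# Generic sketch: rank genes by the l2 norm of their row in a weight
# matrix W (genes x classes); the sum of these row norms is the l2,1 norm.
# Illustration only; the cited study uses a robust l2,p-norm formulation.

def l21_norm(W):
    return float(np.sum(np.linalg.norm(W, axis=1)))

def select_genes(W, k):
    scores = np.linalg.norm(W, axis=1)   # one relevance score per gene
    return np.argsort(scores)[::-1][:k]  # indices of the top-k genes

W = np.array([[0.0, 0.1], [2.0, 1.0], [0.05, 0.0], [1.5, 0.5]])
top2 = select_genes(W, 2)
```

Rows with small norms correspond to genes the sparse model effectively discards.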


Fast Fourier Optimization: Sparsity Matters

Many interesting and fundamentally practical optimization problems, ranging from optics, to signal processing, to radar and acoustics, involve constraints on the Fourier transform of a function. It is well-known that the fast Fourier transform (fft) is a recursive algorithm that can dramatically improve the efficiency for computing the discrete Fourier transform. However, because it is recursiv...
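The recursion the snippet refers to can be sketched in a few lines: a minimal radix-2 Cooley-Tukey FFT (power-of-two lengths only), checked against the naive O(n^2) DFT it accelerates.

```python
import cmath

# Minimal radix-2 Cooley-Tukey FFT, illustrating the recursive structure:
# split into even/odd indices, transform each half, recombine with twiddles.

def fft(a):
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2])
    odd = fft(a[1::2])
    out = [0] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def dft(a):
    """Naive O(n^2) reference implementation."""
    n = len(a)
    return [sum(a[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]
```

The divide-and-conquer structure is exactly what makes the transform fast, and, as the snippet notes, also what complicates imposing constraints on it.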


Online Linear Optimization with Sparsity

Now, let us consider the case of K = K_b with b ∈ (1, ∞). For v ∈ R^d and Q ⊆ [d], let v_Q denote the projection of v to those dimensions in Q. Then for any v ∈ R^d, and any w ∈ K_b with Q = {i : w_i ≠ 0}, we know by Hölder's inequality that ⟨w, v⟩ = ⟨w_Q, v_Q⟩ ≥ −‖w‖_b · ‖v_Q‖_a, for a = b/(b − 1). Moreover, one can have ⟨w_Q, v_Q⟩ = −‖w‖_b · ‖v_Q‖_a when |w_i|^b/‖w‖_b^b = |v_i|^a/‖v‖_a^a and w_i v_i ≤ 0 for every i ∈ Q. ...
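The Hölder step in this snippet can be checked numerically; the sketch below (an illustration, not the paper's own example) picks b = 3, builds v on the support of w with |v_i| proportional to |w_i|^(b−1) and opposite signs, and verifies that the inner product attains the bound −‖w‖_b · ‖v_Q‖_a.

```python
import numpy as np

# Numeric check of the Holder bound <w, v> >= -||w||_b * ||v_Q||_a with
# a = b/(b-1), and of the equality case: |v_i|^a / ||v||_a^a equal to
# |w_i|^b / ||w||_b^b with w_i * v_i <= 0 on the support Q of w.

b = 3.0
a = b / (b - 1.0)
w = np.array([1.0, -2.0, 0.0])            # support Q = {0, 1}
v = -np.sign(w) * np.abs(w) ** (b - 1.0)  # opposite signs; |v_i| = |w_i|^(b-1)

Q = w != 0
norm_w = np.sum(np.abs(w) ** b) ** (1.0 / b)
norm_vQ = np.sum(np.abs(v[Q]) ** a) ** (1.0 / a)
inner = float(w @ v)                      # attains -norm_w * norm_vQ here
```

Note that |v_i|^a = |w_i|^(b−1)·a = |w_i|^b, so the stated proportionality condition holds by construction.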


Optimization with Sparsity-Inducing Penalties

Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appr...
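The canonical instance of such a convex, sparsity-regularized problem is ℓ1 (lasso) regularization of the empirical risk; its proximal operator is elementwise soft-thresholding, the building block of proximal-gradient (ISTA-style) solvers. A minimal sketch (generic illustration, not text from the cited survey):

```python
import numpy as np

# Proximal operator of t * ||.||_1: shrink each coordinate toward zero by t,
# zeroing out anything smaller than t in magnitude; this is what produces
# exactly-sparse solutions in l1-regularized estimation.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = soft_threshold(np.array([3.0, -0.5, 1.2]), 1.0)
```

Coordinates with magnitude below the threshold are set exactly to zero, which is why the penalty induces sparsity rather than merely shrinking coefficients.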


Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints

We study the problem of minimizing the expected loss of a linear predictor while constraining its sparsity, i.e., bounding the number of features used by the predictor. While the resulting optimization problem is generally NP-hard, several approximation algorithms are considered. We analyze the performance of these algorithms, focusing on the characterization of the trade-off between accuracy a...
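A common representative of the approximation algorithms analyzed in this setting is forward greedy selection (orthogonal matching pursuit); the sketch below is a generic illustration under squared loss, not necessarily one of the exact algorithms the paper studies.

```python
import numpy as np

def omp(X, y, k):
    """Greedy forward selection of at most k features under squared loss:
    repeatedly add the feature most correlated with the residual, then
    refit by least squares on the chosen support."""
    n, d = X.shape
    support, residual = [], y.copy()
    for _ in range(k):
        scores = np.abs(X.T @ residual)   # correlation with current residual
        scores[support] = -np.inf         # never re-pick a chosen feature
        support.append(int(np.argmax(scores)))
        Xs = X[:, support]
        coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        residual = y - Xs @ coef
    w = np.zeros(d)
    w[support] = coef
    return w

# With orthonormal features the 2-sparse target is recovered exactly.
w_hat = omp(np.eye(4), np.array([0.0, 3.0, 0.0, -1.0]), 2)
```

The sparsity budget k is the knob in the accuracy/sparsity trade-off: larger k lowers the loss but uses more features.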



Journal

Journal title: ACM Transactions on Graphics

Year: 2022

ISSN: 0730-0301, 1557-7368

DOI: https://doi.org/10.1145/3520484